
    Identifying and removing heterogeneities between monitoring networks

    There is an increased interest in merging observations from local networks into larger national and international databases. However, the observations from different networks have typically been made using different equipment and with different post-processing of the values. These heterogeneities can lead to inconsistencies between networks, and to discontinuities at the borders between regions if the observations are used as a source for interpolated maps of the process. Such discontinuities are undesirable and could make the maps difficult for decision makers to interpret. In this paper, we present two variants of a method that can be used to identify and quantify differences between networks. The first variant deals with networks sharing the same region (usually multiple networks within a country), while the second deals with networks in neighbouring regions (usually networks in different countries). The estimated differences can be used to estimate individual biases for each network, which can be subtracted as a harmonization procedure. The method was applied to European gamma dose rate (GDR) measurements from May 2008 from the European Radiological Data Exchange Platform (EURDEP) database. Data from the Slovenian GDR network are used for an application of the first variant of the method, whereas the complete dataset is used to illustrate the second variant. The results indicate that the two variants are able to identify and quantify biases reliably, and the interpolated maps after subtraction of the estimated biases appear more reliable than maps created from the recorded data.
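    The first variant (networks sharing one region) can be illustrated with a toy sketch: estimate each network's additive bias from its deviation from the pooled data and subtract it. This is a deliberately crude stand-in for the paper's geostatistical estimator, and all values (the field, the biases, the noise) are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic GDR-like field sampled by two co-located networks,
    # each with its own additive bias (hypothetical values).
    true_field = lambda x, y: 100 + 5 * np.sin(x) + 3 * np.cos(y)
    n = 200
    x, y = rng.uniform(0, 10, n), rng.uniform(0, 10, n)
    network = rng.integers(0, 2, n)          # network index: 0 or 1
    true_bias = np.array([4.0, -4.0])        # per-network systematic offset
    z = true_field(x, y) + true_bias[network] + rng.normal(0, 1, n)

    # Crude bias estimate: each network's mean deviation from the pooled mean,
    # centred so the estimated biases sum to zero (one common identification
    # constraint, since only relative biases are estimable).
    pooled = z.mean()
    est_bias = np.array([z[network == k].mean() - pooled for k in (0, 1)])
    est_bias -= est_bias.mean()

    z_harmonized = z - est_bias[network]     # harmonization by subtraction
    print(np.round(est_bias, 1))
    ```

    Because both networks sample the same field at random locations, the mean deviations recover the relative offsets; in the paper's setting the comparison is done geostatistically rather than through simple means.
    
    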

    Data harmonization with geostatistical tools: a Bayesian extension

    Mapping at an international scale may suffer from bias due to systematic differences in measurement devices and procedures. Biases show up when interpolating the target variable across borders. Harmonization of multinational datasets is therefore important and becomes a compulsory preprocessing step prior to geostatistical mapping and analysis. This paper explores the possibility of automatically removing systematic biases between different measurement networks. We extend the method proposed by Baume et al. (2008) to a Bayesian setting. Sources of heterogeneity in the data, such as measurement device types, data handling methods and site criteria, are taken into account in a global setting using a linear regression kriging model. The model introduces both measurement and state variables, incorporating bias factors as well as natural drifts in the trend. Analytical solutions for the posterior distribution of the biases are given in the Gaussian case and applied to a case study. In the harmonization context, prior information on network biases may be treated in two ways: it is best to rely on expert knowledge, but where such information is missing, an alternative local estimation method introduced by Skøien et al. (2008) is used. An example is taken from mapping radioactivity exposure across European countries. Results show that significant bias is present in the radioactivity data and can successfully be removed for the networks presented.
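    The core Bayesian idea in the Gaussian case can be reduced to a conjugate update for a single network's additive bias: a Gaussian prior (e.g. from expert knowledge) is combined with a Gaussian estimate of the bias from the data. The numbers below are hypothetical, and this scalar update is a simplification of the paper's full regression-kriging posterior.

    ```python
    # Conjugate Gaussian update for one network's additive bias.
    # Prior from expert knowledge: bias ~ N(mu0, tau0**2).
    mu0, tau0 = 0.0, 5.0
    # Data-based estimate of the bias and its standard error, e.g. from
    # comparing the network against its neighbours (hypothetical numbers).
    b_hat, se = 3.2, 1.0

    # Posterior precision is the sum of prior and data precisions;
    # the posterior mean is the precision-weighted average.
    prec_post = 1 / tau0**2 + 1 / se**2
    mu_post = (mu0 / tau0**2 + b_hat / se**2) / prec_post
    sd_post = prec_post ** -0.5
    print(round(mu_post, 3), round(sd_post, 3))  # ~3.077, ~0.981
    ```

    With a vague prior (large `tau0`) the posterior mean stays close to the data estimate, which mirrors the paper's fallback to the Skøien et al. (2008) local estimate when expert information is missing.
    
    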

    A geostatistical approach to data harmonization - Application to radioactivity exposure data

    Environmental issues such as air and groundwater pollution and climate change are frequently studied at spatial scales that cross boundaries between political and administrative regions. It is common for different administrations to employ different data collection methods. If these differences are not taken into account in spatial interpolation procedures then biases may appear and cause unrealistic results. The resulting maps may show misleading patterns and lead to wrong interpretations, and errors will propagate when these maps are used as input to environmental process models. In this paper we present and apply a geostatistical model that generalizes the universal kriging model such that it can handle heterogeneous data sources. The associated best linear unbiased estimation and prediction (BLUE and BLUP) equations are presented, and it is shown that these lead to harmonized maps from which estimated biases are removed. The methodology is illustrated with an example of country bias removal in a radioactivity exposure assessment for four European countries. The application also addresses multicollinearity problems in data harmonization, which arise when both artificial bias factors and natural drifts are present and cannot easily be distinguished. Solutions for handling multicollinearity are suggested and directions for further investigation are proposed.
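    The trend model behind this approach combines a natural drift with per-country bias factors. The sketch below fits such a trend with ordinary least squares rather than the full BLUE with a spatial covariance matrix, on made-up data; dropping one country indicator illustrates the simplest way to avoid the multicollinearity between a global intercept and a full set of bias factors.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic data: z = natural drift + country bias + residual.
    n = 300
    x, y = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
    country = rng.integers(0, 3, n)                  # three "countries"
    bias = np.array([0.0, 2.0, -1.5])                # country 0 is the reference
    z = 10 + 4 * x + bias[country] + rng.normal(0, 0.5, n)

    # Design matrix: intercept, drift in x, and indicator columns for
    # countries 1 and 2 only (one indicator is dropped so the intercept
    # and the bias factors remain jointly identifiable).
    X = np.column_stack([np.ones(n), x,
                         (country == 1).astype(float),
                         (country == 2).astype(float)])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    print(np.round(beta, 2))   # roughly [10, 4, 2, -1.5]
    ```

    In the paper's setting the same design enters a generalized least squares system with the spatial covariance of the residuals, giving the BLUE of the drift and bias coefficients and the BLUP for the harmonized map.
    
    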

    Uncertainty propagation in chained web based modelling services: the case of eHabitat

    eHabitat is a Web Processing Service (WPS) designed to compute the likelihood of finding ecosystems with similar conditions. Starting from a reference area, typically a protected area, one can compute for each pixel of a region of interest the probability of finding a combination of a set of predefined environmental indicators that is similar to the one observed in the reference area, using the Mahalanobis distances to the mean and covariance of these indicators. Inputs to the WPS are thus the reference polygon and a set of environmental indicators, typically thematic geospatial “layers”, which can be discovered using standardised catalogues. The outputs can be tailored to specific end user needs in terms of data format and data resolution. Because these input layers can range from geophysical data captured through remote sensing to socio-economic indicators, eHabitat is exposed to a broad range of different types and levels of uncertainties, which are inevitably propagated through the service (see e.g. Heuvelink, 1998). Potentially chained to other services to perform ecological forecasting, for example, eHabitat would be an additional component further propagating uncertainties from a potentially long chain of model services. Such a configuration of distributed data and model services, as envisaged by initiatives such as the “Model Web” from the Group on Earth Observations, requires clear information on data uncertainties if it is to be of any use to policy or decision makers. The development of such an uncertainty-enabled Model Web is the scope of the UncertWeb project, which is promoting interoperability between data and models with quantified uncertainty and building a framework on existing open, international standards.
    It is the objective of this paper to illustrate a few key ideas behind UncertWeb, using eHabitat to discuss the main types of uncertainties the WPS has to deal with and to present the benefits of using the UncertWeb framework.
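    The per-pixel similarity computation can be sketched as follows. This is a toy numpy example with random stand-in indicator layers; the exponential rescaling of the squared Mahalanobis distance into a 0-1 score is one possible choice, not necessarily the transformation eHabitat applies.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Stand-in indicator "layers": h x w grid, k environmental indicators.
    h, w, k = 50, 50, 3
    layers = rng.normal(0, 1, (h, w, k))

    # Reference "protected area": a small window of the grid.
    ref = layers[10:20, 10:20].reshape(-1, k)
    mu = ref.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(ref, rowvar=False))

    # Squared Mahalanobis distance of every pixel to the reference statistics.
    diff = layers.reshape(-1, k) - mu
    d2 = np.einsum("nj,jk,nk->n", diff, cov_inv, diff)

    # Map distances to a 0-1 similarity score (one possible rescaling).
    similarity = np.exp(-0.5 * d2).reshape(h, w)
    print(similarity.shape)
    ```

    Pixels whose indicator combination matches the reference area's mean get scores near 1; uncertainty in the input layers would perturb `mu`, `cov_inv` and `d2`, which is exactly the propagation problem the UncertWeb framework addresses.
    
    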

    INTAMAP: The design and implementation of an interoperable automated interpolation web service

    INTAMAP is a Web Processing Service for the automatic spatial interpolation of measured point data. The requirements were (i) the use of open standards for spatial data, such as those developed in the context of the Open Geospatial Consortium (OGC); (ii) a suitable environment for statistical modelling and computation; and (iii) an integrated, open-source solution. The system couples an open-source Web Processing Service (developed by 52°North), accepting data in the form of standardised XML documents (conforming to the OGC Observations and Measurements standard), with a computing back-end realised in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a markup language designed to encode uncertain data. Automatic interpolation needs to be useful for a wide range of applications, so the algorithms have been designed to cope with anisotropy, extreme values, and data with known error distributions. Besides a fully automatic mode, the system can be used with different levels of user control over the interpolation process.
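    At its core, such a back-end solves a kriging system for each request and returns predictions together with their error variances. The sketch below is a plain numpy stand-in, not the actual INTAMAP/R code; the exponential variogram and its parameters are made up.

    ```python
    import numpy as np

    def krige(obs_xy, obs_z, pred_xy, sill=1.0, rng_par=2.0, nugget=0.05):
        """Ordinary kriging with an exponential variogram (illustrative only)."""
        def gamma(h):  # exponential variogram model
            return nugget + sill * (1 - np.exp(-h / rng_par))

        d_oo = np.linalg.norm(obs_xy[:, None] - obs_xy[None], axis=-1)
        d_op = np.linalg.norm(obs_xy[:, None] - pred_xy[None], axis=-1)
        n = len(obs_z)

        # Ordinary kriging system with a Lagrange multiplier row/column
        # enforcing that the weights sum to one.
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = gamma(d_oo)
        np.fill_diagonal(A[:n, :n], 0.0)   # gamma(0) = 0 by definition
        A[n, n] = 0.0
        b = np.ones((n + 1, d_op.shape[1]))
        b[:n] = gamma(d_op)

        w = np.linalg.solve(A, b)
        pred = w[:n].T @ obs_z
        var = np.einsum("ij,ij->j", w, b)  # kriging variance per prediction
        return pred, var

    obs_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    obs_z = np.array([1.0, 2.0, 3.0])
    pred, var = krige(obs_xy, obs_z, np.array([[0.5, 0.5]]))
    print(pred, var)
    ```

    The kriging variance returned alongside each prediction is the quantity that the service encodes with UncertML, so that downstream clients receive not just a map but a distribution of interpolation errors.
    
    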